Concept | Explanation | Command / Example |
---|---|---|
Terraform CLI Install | Install Terraform on your local machine or CI/CD environment. | On Linux (example): download the release zip from releases.hashicorp.com, unzip it, and place the terraform binary on your PATH, or install from the HashiCorp apt repository with sudo apt-get install terraform. Verify with terraform -version. |
Providers | Plugins that allow Terraform to interact with various APIs and services (e.g., Google Cloud, AWS, Azure). Each must be configured (version, credentials, etc.). | Declared in .tf files, e.g. provider "google" { project = "my-project" } (see provider.tf below). |
Initializing | Downloads providers and sets up the working directory so Terraform is ready to run. Must be done before plan/apply. | terraform init |
Planning | Checks your configuration, compares it to the current state, and shows the changes that will be made. Helps avoid surprises. | terraform plan |
Applying | Applies changes to match your desired configuration. Will prompt for confirmation unless you use -auto-approve. | terraform apply |
Destroying | Destroys all resources from the current configuration. Useful for cleanup. | terraform destroy |
State Files | Terraform maintains a local or remote state file (terraform.tfstate) that maps real-world resources to your configuration. This file is crucial for understanding the current infrastructure state and for making incremental changes. Handle state carefully and keep it secure. | Stored on local disk by default, or in a remote backend (such as GCS). Never commit sensitive state to version control. |
Backends | Mechanisms to store Terraform state remotely and securely (e.g., in Google Cloud Storage, AWS S3, Terraform Cloud). Enables collaboration and state locking so multiple team members can work safely. | Declared in a terraform { backend "gcs" { ... } } block (see backend.tf below). |
Variables | Allow parameterizing your Terraform configuration, making it flexible and reusable across different environments. | Declared with variable blocks in .tf files; values supplied via .tfvars files or -var flags (see variables.tf below). |
Outputs | Expose key information from your configuration for easy reference. Often used to pass values to external systems or to quickly see addresses of new resources. | Declared with output blocks (see outputs.tf below). |
Resources | The fundamental blocks to create and manage infrastructure (e.g., compute instances, storage, networking). Each resource has a type and configuration arguments. | Declared with resource blocks (see main.tf below). |
Data Sources | Read information defined outside of Terraform or managed by a different configuration. Allows data lookups without managing those resources. | Declared with data blocks (see the sketch after this table). |
Modules | Group resources together into reusable packages. Encouraged for organizing bigger projects or sharing standardized setups across teams. | Called with module blocks (see the sketch after this table). |
Workspaces | Let you maintain multiple state files in a single directory structure. Useful for multi-environment setups (e.g., dev, test, prod). | terraform workspace new dev |
Version Locking | Ensures consistency by specifying allowed Terraform and provider versions. Protects your environment from accidental upgrades. | Set in a terraform { required_version = "..." } block (see the sketch after this table). |
Formatting & Validation | Keeps your configs clean and ensures correctness. | terraform fmt to format code; terraform validate to check syntax. |
Importing Resources | Bring an existing resource under Terraform management without destroying and re-creating it. State is updated to reflect the existing resource configuration. | terraform import google_compute_instance.vm_instance my-project/us-central1-a/my-instance |
Lifecycle Hooks | Control how resources are created, updated, or destroyed via the lifecycle block (e.g., prevent_destroy = true, or ignoring changes in certain arguments). | A lifecycle block inside a resource (see the sketch after this table). |
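The table entries that point to "the sketch after this table" (data sources, modules, version locking, lifecycle hooks) are illustrated together below. This is a hedged sketch: the image family, module source path, bucket name, and version constraints are placeholder assumptions, not values from any particular project.

```hcl
# Data source: look up an existing image instead of managing it
data "google_compute_image" "debian" {
  family  = "debian-11"
  project = "debian-cloud"
}

# Module: group resources into a reusable package (source path is a placeholder)
module "network" {
  source       = "./modules/network"
  network_name = "my-vpc"
}

# Version locking: pin the Terraform CLI version (illustrative constraint)
terraform {
  required_version = ">= 1.3.0"
}

# Lifecycle: protect a resource from accidental destruction
resource "google_storage_bucket" "state" {
  name     = "my-important-bucket"
  location = "US"

  lifecycle {
    prevent_destroy = true
  }
}
```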
Below is a simple structure you could use. Create a directory, e.g. my-terraform-project/, and place these files inside it:

```text
my-terraform-project/
├── provider.tf
├── variables.tf
├── main.tf
├── outputs.tf
└── backend.tf   (optional: only if using remote state in GCS)
```
provider.tf
Configures the Google provider to point to the right project, region, and zone.
provider "google" {
# project, region, and zone will be read from the variables
project = var.project_id
region = var.region
zone = var.zone
}
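provider.tf on its own does not pin a provider version. A hedged sketch of a terraform block (it could live in provider.tf or a separate versions.tf) that locks the Google provider, with the version constraints as illustrative assumptions rather than requirements of this example:

```hcl
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"   # illustrative; adjust to the version you test against
    }
  }
}
```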
variables.tf
Defines variables so you can parameterize your config.
variable "project_id" {
type = string
description = "GCP Project ID"
}
variable "region" {
type = string
description = "GCP region"
default = "us-central1"
}
variable "zone" {
type = string
description = "GCP zone"
default = "us-central1-a"
}
variable "network_name" {
type = string
description = "Name for the VPC network"
default = "my-vpc"
}
variable "subnet_name" {
type = string
description = "Name for the subnet"
default = "my-subnet"
}
variable "machine_type" {
type = string
description = "Instance machine type"
default = "e2-micro"
}
variable "instance_name" {
type = string
description = "Name for the compute instance"
default = "my-vm"
}
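Note that project_id has no default, so it must be supplied at plan/apply time. One common way is a terraform.tfvars file, which Terraform loads automatically; the values below are placeholders:

```hcl
# terraform.tfvars (placeholder values)
project_id    = "my-gcp-project"
region        = "us-central1"
zone          = "us-central1-a"
instance_name = "my-vm"
```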
main.tf
Core Terraform resources for a VPC, subnet, firewall rule, and a Compute Engine instance.
```hcl
# Create a VPC network
resource "google_compute_network" "vpc_network" {
  name                    = var.network_name
  project                 = var.project_id
  auto_create_subnetworks = false
}

# Create a subnet in that VPC
resource "google_compute_subnetwork" "vpc_subnet" {
  name          = var.subnet_name
  ip_cidr_range = "10.0.0.0/24"
  region        = var.region
  network       = google_compute_network.vpc_network.self_link
  project       = var.project_id
}

# Create a firewall rule to allow SSH
resource "google_compute_firewall" "allow_ssh" {
  name    = "allow-ssh"
  project = var.project_id
  network = google_compute_network.vpc_network.self_link

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
}

# Create a Compute Engine instance
resource "google_compute_instance" "vm_instance" {
  name         = var.instance_name
  machine_type = var.machine_type
  project      = var.project_id
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "projects/debian-cloud/global/images/family/debian-11"
    }
  }

  network_interface {
    network    = google_compute_network.vpc_network.self_link
    subnetwork = google_compute_subnetwork.vpc_subnet.self_link

    # Ephemeral external IP
    access_config {}
  }
}
```
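The allow_ssh rule above permits SSH from any address (0.0.0.0/0) to every instance on the network. If you want something narrower, one common pattern is to scope the rule with target_tags and a trusted source range; this is a hedged sketch with a placeholder tag name and CIDR, not part of the original example:

```hcl
# A narrower SSH rule: only instances tagged "ssh-allowed",
# only from a trusted range (both values are placeholders).
resource "google_compute_firewall" "allow_ssh_restricted" {
  name    = "allow-ssh-restricted"
  project = var.project_id
  network = google_compute_network.vpc_network.self_link

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["203.0.113.0/24"]
  target_tags   = ["ssh-allowed"]
}
```

For the rule to apply, the instance resource would also need a matching tags = ["ssh-allowed"] argument.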
outputs.tf
Outputs let you easily reference attributes of created resources (e.g., instance IP).
output "instance_name" {
description = "Name of the compute instance"
value = google_compute_instance.vm_instance.name
}
output "instance_external_ip" {
description = "External IP of the compute instance"
value = google_compute_instance.vm_instance.network_interface[0].access_config[0].nat_ip
}
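After an apply, outputs can be read back from the CLI at any time without looking at the cloud console:

```bash
terraform output                       # list all outputs
terraform output instance_external_ip  # print a single value
terraform output -json                 # machine-readable form for scripts
```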
backend.tf
Configures a remote backend to store Terraform state in a Google Cloud Storage (GCS) bucket. This is optional: if you prefer local state, skip this file.
```hcl
terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket"
    prefix = "terraform/state"
  }
}
```
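The GCS bucket must exist before you run terraform init against this backend. A sketch of creating it with gsutil, reusing the placeholder bucket name from backend.tf above; enabling object versioning is a common safeguard so older state files can be recovered:

```bash
# Create the state bucket (name must be globally unique)
gsutil mb -l US gs://my-terraform-state-bucket

# Keep previous versions of the state object in case of accidental corruption
gsutil versioning set on gs://my-terraform-state-bucket
```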
Open your terminal and run the following commands (assuming you have a Google Cloud project and credentials configured, for example via gcloud auth application-default login):
1. Run terraform init to initialize the working directory.
2. Optionally run terraform plan to see the changes Terraform will make.
3. Run terraform apply to create the resources.
4. Run terraform destroy to remove them when finished.
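Putting the steps together, a typical first run might look like the following. The project ID is a placeholder, and the -var flag is only needed if you did not create a terraform.tfvars file:

```bash
terraform init
terraform plan  -var="project_id=my-gcp-project"
terraform apply -var="project_id=my-gcp-project"

# When you are done with the resources:
terraform destroy -var="project_id=my-gcp-project"
```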